
A Unified Gradient Regularization Family for Adversarial Examples



Abstract

Adversarial examples are augmented data points generated by imperceptible perturbation of input samples. They have recently drawn much attention from the machine learning and data mining community. Being difficult to distinguish from real examples, such adversarial examples can change the predictions of many of the best learning models, including state-of-the-art deep learning models. Recent attempts have been made to build robust models that take adversarial examples into account. However, these methods either suffer performance drops or lack mathematical motivation. In this paper, we propose a unified framework for building machine learning models that are robust against adversarial examples. More specifically, using the unified framework, we develop a family of gradient regularization methods that effectively penalize the gradient of the loss function w.r.t. the inputs. Our proposed framework is appealing in that it offers a unified view of adversarial examples, and it incorporates another recently proposed perturbation-based approach as a special case. In addition, we present visualizations that reveal semantic meaning in those perturbations, which supports our regularization method and provides another explanation for the generalizability of adversarial examples. By applying this technique to Maxout networks, we conduct a series of experiments and achieve encouraging results on two benchmark datasets. In particular, we attain the best accuracy on MNIST (without data augmentation) and competitive performance on CIFAR-10.
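The core idea of the abstract can be illustrated with a minimal sketch: augment the usual training loss with a penalty on the norm of the gradient of the loss w.r.t. the inputs. This is not the paper's exact formulation (which covers a family of regularizers and is applied to Maxout networks); it is a hypothetical illustration using logistic regression, where the input gradient has a simple closed form. The names `loss_with_grad_penalty` and `lam` are invented for this sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_with_grad_penalty(w, X, y, lam):
    """Cross-entropy loss plus an L2 penalty on the gradient of the
    per-example loss w.r.t. the inputs (not the weights)."""
    p = sigmoid(X @ w)  # predicted probabilities, shape (n,)
    ce = -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))
    # For logistic regression, d(loss_i)/d(x_i) = (p_i - y_i) * w,
    # so the input-gradient norm has a closed form.
    grad_x = (p - y)[:, None] * w[None, :]          # shape (n, d)
    penalty = np.mean(np.sum(grad_x ** 2, axis=1))  # mean squared L2 norm
    return ce + lam * penalty

# Toy data: two 2-D points with opposite labels.
w = np.array([1.0, -2.0])
X = np.array([[0.5, 1.0], [1.5, -0.5]])
y = np.array([1.0, 0.0])
plain = loss_with_grad_penalty(w, X, y, lam=0.0)
regularized = loss_with_grad_penalty(w, X, y, lam=0.5)
```

Because the penalty is non-negative, the regularized objective is always at least the plain cross-entropy; minimizing it trades some training fit for a loss surface that is flatter in input space, which is the intuition behind robustness to small input perturbations. In a deep-learning setting one would typically obtain the input gradient via automatic differentiation rather than a closed form.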
